Toshiba Develops High-Speed Algorithm and Hardware Architecture for Deep Learning Processor
Toshiba Memory Corporation today announced the development of a high-speed, energy-efficient algorithm and hardware architecture for deep learning processing with little degradation in recognition accuracy. The new deep learning processor, implemented on an FPGA, achieves four times the energy efficiency of conventional processors. The advance was announced at the IEEE Asian Solid-State Circuits Conference 2018 (A-SSCC 2018) in Taiwan on November 6.

Deep learning calculations generally require large numbers of multiply-accumulate (MAC) operations, which results in long calculation times and high energy consumption. Techniques that reduce the number of bits used to represent parameters (the bit precision) have been proposed to cut the total amount of calculation; one proposed algorithm reduces the bit precision down to one or two bits, but such techniques degrade recognition accuracy.

Toshiba Memory developed a new algorithm that reduces MAC operations by optimizing the bit precision of MAC operations for individual filters in each layer of a neural network. With the new algorithm, MAC operations can be reduced with little degradation in recognition accuracy.

Furthermore, Toshiba Memory developed a new hardware architecture, called the bit-parallel method, which is suited to MAC operations of differing bit precision. The method divides each operand of arbitrary bit precision into individual bits and executes the resulting 1-bit operations across numerous MAC units in parallel. This significantly improves the utilization efficiency of the MAC units in the processor compared with conventional MAC architectures that execute the bits in series.

Toshiba Memory implemented ResNet-50, a deep neural network, on an FPGA using the variable bit precision and the bit-parallel MAC architecture. For image recognition on the ImageNet dataset, both the operation time and the energy consumed to recognize image data were reduced to 25% of those of the conventional method, with little degradation in recognition accuracy.
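The decomposition behind a bit-parallel MAC can be illustrated with a short sketch. The Python snippet below is only a minimal illustration of the general idea, not Toshiba's algorithm or hardware: a multiply-accumulate over operands of arbitrary bit precision is expanded into 1-bit AND operations, each weighted by its bit position, which is the kind of uniform 1-bit work that can be spread across many MAC units in parallel. The function names and the tiny test values are invented for the example.

```python
# Minimal sketch (not Toshiba's implementation) of the bit-parallel MAC idea:
# an n-bit x m-bit multiply decomposes into 1-bit AND operations that
# independent 1-bit MAC units could execute in parallel.

def bits(value, width):
    """Return the binary digits of an unsigned integer, LSB first."""
    return [(value >> i) & 1 for i in range(width)]

def bit_parallel_mac(weights, activations, w_bits, a_bits):
    """Accumulate sum(w * a) using only 1-bit products.

    Each (weight bit, activation bit) pair contributes a 1-bit AND shifted by
    the sum of the bit positions; in hardware these 1-bit operations can be
    spread across many MAC units, regardless of the per-filter precision.
    """
    acc = 0
    for w, a in zip(weights, activations):
        for i, wb in enumerate(bits(w, w_bits)):
            for j, ab in enumerate(bits(a, a_bits)):
                acc += (wb & ab) << (i + j)   # 1-bit product, weighted by position
    return acc

# Sanity check against an ordinary integer MAC
ws, xs = [3, 1, 2], [5, 7, 4]
assert bit_parallel_mac(ws, xs, w_bits=2, a_bits=3) == sum(w * x for w, x in zip(ws, xs))
```

Because every partial product is a 1-bit operation of identical shape, filters quantized to different precisions simply generate more or fewer of them, which is why the utilization of the parallel 1-bit units stays high.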
Release time: 2018-11-07
SiFive announces first open-source RISC-V-based SoC platform with NVIDIA Deep Learning Accelerator technology
SiFive, a provider of commercial RISC-V processor IP, today announced the first open-source RISC-V-based SoC platform for edge inference applications based on NVIDIA's Deep Learning Accelerator (NVDLA) technology.

The demo will be shown this week at the Hot Chips conference and consists of NVDLA running on an FPGA connected via ChipLink to SiFive's HiFive Unleashed board, powered by the Freedom U540, the world's first Linux-capable RISC-V processor. The complete SiFive implementation is well suited for intelligence at the edge, where high performance with improved power and area profiles is crucial. SiFive's silicon design capabilities and innovative business model enable a simplified path to building custom silicon on the RISC-V architecture with NVDLA.

NVIDIA open-sourced its leading deep learning accelerator over a year ago to spark the creation of more AI silicon solutions. Open-source architectures such as NVDLA and RISC-V are essential building blocks of innovation for Big Data and AI solutions.

"It is great to see open-source collaborations, where leading technologies such as NVDLA can pave the way for more custom silicon to enhance the applications that require inference engines and accelerators," said Yunsup Lee, co-founder and CTO, SiFive. "This is exactly how companies can extend the reach of their platforms."

"NVIDIA open sourced its NVDLA architecture to drive the adoption of AI," said Deepu Talla, vice president and general manager of Autonomous Machines at NVIDIA. "Our collaboration with SiFive enables customized AI silicon solutions for emerging applications and markets where the combination of RISC-V and NVDLA will be very attractive."
Release time: 2018-08-21
Deep Learning Next Big Chip Growth Driver
Count Wally Rhines, semiconductor industry veteran and long-time CEO of Mentor Graphics, among the many who believe that deep-learning hardware will drive the next wave of growth for the semiconductor industry.

Speaking at the GSA European Executive Forum here this week, Rhines added that memory will continue to be a key driver of the chip industry going forward. Despite the volatility of the semiconductor industry, R&D investment continues to run at around 14% of revenue, as it has for the last 36 years, Rhines said, dismissing arguments that not enough is being ploughed back into R&D to sustain growth.

On growth, we should be watching China, and a lot of growth will come from visual processing for AI applications, Rhines said.

He offered his perspective on the future of the semiconductor industry and on why revenue forecasts from research and analyst firms have been consistently off target. With numerous historical charts and data, he highlighted the 96% error in the Semiconductor Industry Association's three-year projection made in 2002 and how analysts were off by 17% in the one-year forecast for 2017.

He questioned why it is so difficult to forecast semiconductor industry revenue, especially when IC unit demand has always been predictable at around 8% growth per year and silicon area shipments can also be predicted and easily measured.

One reason for the discrepancy more recently is the rise of system IC companies, whose revenue is only partially counted. "With leading system and digital companies also now becoming the new SoC designers and manufacturers, semiconductor industry forecasts are often deprived of the cost of the system," said Rhines. "Systems companies comprised 13% of pure-play foundry sales in 2017."

Rhines asked whether the 2017 boom in semiconductor revenue was a fluke and then answered his own question. "When you consider the evolution of memory versus logic, memory now dominates the percentage of transistors manufactured," he said.

But, he added, when you consider that the next wave of computer architectures for visual processing and AI will require vast amounts of memory for brain-like pattern recognition, this will drive up memory volumes and, hence, semiconductor revenues.

This will be dominated by areas like autonomous vehicles, with their need for visual information capture, storage, and processing and the corresponding flood of data. In addition, AI-related companies are the ones that tend to be funded heavily: Q4 2017 saw some huge investments in this area, with companies like Horizon Robotics raising $100 million, Graphcore raising $50 million, and Vayyar Imaging raising $45 million.

Addressing the wider ecosystem, free trade

Rhines was upbeat about the future prospects of the semiconductor industry, and the conference generally acknowledged that there is a new reality for the industry. Sandro Grigolli, the GSA's EMEA executive director, said at the opening session that the rise of companies not previously part of the industry, like Google, Amazon and Facebook, meant that the traditional semiconductor industry needed to figure out a way to "play gracefully with these companies."

"We are part of a larger ecosystem, and the GSA needs to change to accommodate this, which is what we are doing with the introduction of automotive and IoT security interest groups," Grigolli said.

On the threat posed by new companies entering the chip industry, Faraj Aalaei, president and CEO of Aquantia, wasn't too worried.

"These companies are not a long-term threat to the semiconductor companies; over time, they'll see the value in the learnings and experience [of established semi companies]. If your customer can build a better chip than you, you don't deserve to be in the chip business; it just means your edge is not as good as you think it is."

Aalaei also said that the biggest problem with the semiconductor industry is Moore's law. "By always referring to this law, the customer interprets this as, 'I'm going to get the same product in 12 months' time at half the price.' So we need to stop talking about Moore's law."

The tensions in free trade also came up, with Helmut Gassel, chief marketing officer and board member of Infineon Technologies, commenting, "In San Diego, two weeks ago at the World Semiconductor Council, we agreed that we need to work hard to ensure free trade across the globe and to convince politicians that trade wars are not good for growth."
Release time: 2018-06-11
Intel/Saffron AI Plan Sidesteps Deep Learning
Intel's $1 billion investment in the AI ecosystem is one of the well-publicized talking points at the processor company. The Intel empire boasts a breadth of AI technologies it has amassed through acquisitions and Intel Capital investments in AI startups.

The acquired companies seemingly useful to Intel's AI ambitions thus far include Altera (2015), Saffron (2015), Nervana (2016), Movidius (2016) and Mobileye (2017). Intel Capital has also fattened its AI portfolio with startups Mighty AI, Data Robot, Lumiata, CognitiveScale, Aeye Inc., Element AI and others.

Unclear is how Intel is going to stitch all this together.

With AI innovation still in its early days, Intel's apparently scattershot approach to AI strategy might be justified. We might have to wait a while for a more coherent narrative to emerge.

Intel has talked up its AI hardware portfolio more often than its overall AI strategy. Typical was Intel's announcement this week that it will ship, before the end of the year, the Nervana Neural Network Processor (NNP), formerly known as "Lake Crest." Naveen Rao, formerly CEO and co-founder of Nervana and now Intel's vice president and general manager of AI products, describes the NNP as featuring "a purpose built architecture for deep learning."

Intel has other ammunition when it comes to AI chips, including the Xeon family, FPGAs (from Altera), Mobileye (for automotive) and Movidius (for machine learning at the edge).

However, Intel has been reticent about AI applications, or exactly which fields of AI it will focus on. AI is a realm both broad and deep. Among Intel's sprawl of acquisitions, the biggest mystery might be Saffron.

It was hard not to notice earlier this month when Intel announced a new product called the Intel Saffron Anti-Money Laundering (AML) Advisor. The product isn't hardware, although AML apparently runs on Xeon processors, but a tool for investigators and analysts to ferret out financial crimes.

Earlier this week, EE Times had a chat with Elizabeth Shriver-Procell, director, financial industry solutions at Saffron Technology, to learn about the AI technologies behind Saffron's product and what she sees as Saffron's gain in becoming an Intel company.

Mainly, though, we wanted to know what a long-time financial crime-fighter like Shriver-Procell is doing inside the world's largest CPU company.

EE Times: Tell us a little bit about yourself. I hear you are an expert on financial analytics, having worked at various companies and agencies including the Treasury Department.

Shriver-Procell: I am a lawyer, with the focus of my work on fighting financial crimes. I've worked at international consultancies and various financial institutions. Most recently, I was at Bank of America. I joined Saffron earlier this year. Yes, I also worked at the U.S. Treasury as a program manager for analytics development.

EE Times: So before coming to Saffron, did you use Saffron's products?

Shriver-Procell: Some organizations I was associated with, including some clients at consulting companies, have used Saffron. I've been intrigued by the platform, so when this opportunity came up, I took it.

EE Times: So, what exactly does Saffron offer?

Shriver-Procell: Saffron was always sold and marketed as an "analytic platform" customizable for broader applications. Users include supply chains, banks and insurance companies.

EE Times: With the launch of the Intel Saffron Anti-Money Laundering Advisor, has anything changed in Saffron's platform approach?

Shriver-Procell: We're now rolling out specific products for specific applications.

Different branch of AI

EE Times: I suspect the primary reason for Intel to acquire Saffron had more to do with getting its hands on Saffron's AI technologies than with solving financial crimes (although that is a worthy cause). Tell us a little bit about what kind of AI expertise Saffron has designed and uses for what you do. And how is that different from other AI technologies?

Shriver-Procell: At Saffron, the AI technology we use is called Associative Memory AI, which is a different branch of artificial intelligence from deep learning. Associative memory AI is very good at looking at a large volume of data, and a high variety of data, and discerning signatures or patterns across databases that are far apart. It unifies structured and unstructured data from enterprise systems, email, web and other data sources.

EE Times: Give us examples.

Shriver-Procell: Take the example of a banking customer named Mary. Mary goes to London every other week and shops at the Liberty store. John, who lives in a different country, goes to London at about the same time Mary is there and does something entirely different. Is there any relationship between the two? What are the commonalities between the two? Can we take a look at their IP addresses? Do we find any similarities in their log-in patterns? Is there anything that shows whether any nefarious activities are going on?

EE Times: So, the point is that Associative Memory AI can look at many seemingly unrelated databases at the same time.

Shriver-Procell: Not only that, it gets the job done very quickly, a task that would otherwise be very time-consuming. While it takes a lot of training for deep learning to work, Associative Memory AI does not need to be trained. This AI does rapid, one-shot learning. It's a model-free AI.

EE Times: In the press release, you talk about Saffron's "white box AI." Please explain.

Shriver-Procell: By "white box AI," we're talking about transparency. We can explain how we have arrived at a certain conclusion. In the past, financial institutions acquired a model-based, vendor-supplied solution for fraud detection. We call it a "black box" because users have no idea how the software works inside the black box. When regulators ask financial institutions how they came to a conclusion, they can't really explain it. They can't see what's inside the black box, and they can't tell if it was working properly. In highly regulated industries, it's critical for financial institutions to be able to provide transparency in their data.

EE Times: Interesting. That sounds like almost the opposite of deep learning AI. Some safety experts worry that when deep learning AI deployed in autonomous vehicles makes a certain decision (turning a corner, for example), carmakers can't explain why the AI made its decision. The lack of transparency in the learning process makes it tough for carmakers to validate the safety of autonomous cars.

Shriver-Procell: I think it's important to recognize that there are different approaches to AI. When Intel's CEO talks about unlocking the promise of AI, he says that we may try new things. We need to explore new learning paradigms.

EE Times: Do you see any of those different branches of AI converging at some point?

Shriver-Procell: I think they are complementary. As we see a growing trend toward blending of applications, I think multiple types of AI will be able to address the needs presented by a spectrum of applications.

EE Times: Tell us more about your new products.

Shriver-Procell: As I said before, Saffron always sold its product as a platform. Now, as we find specific needs in specific market segments, we've decided to roll out specific solutions as products that can meet the market's challenges.

Saffron has always held a very strong position in the financial market, backed by its experience in finding financial crimes. By unifying structured and unstructured data linked into a 360-degree view, we can make sense of the patterns found across boundaries, wherever the data is stored.

We also announced that the Bank of New Zealand has just joined the Intel Saffron Early Adopter Program. This is designed for institutions interested in innovation in financial services by taking advantage of the latest advancements in associative memory artificial intelligence.

EE Times: What do you think Saffron has gained by becoming an Intel company?

Shriver-Procell: The benefits of joining Intel are great. We're talking about serious problems that large financial institutions are fighting. In order to be able to support them, you need all the power and support that a large corporation like Intel brings to bear. We also need the full support of Intel as a technology partner as we create new capabilities and applications on the Saffron platform and make them scalable and extensible. As AI rapidly advances, you can't overlook the significance of exploring new things, new ways to do AI.

AI in its infancy

After the interview with Saffron, EE Times got in touch with a few analysts to see how they view the state of AI technology development.

Jim McGregor, founder and principal analyst at Tirias Research, observed, "There are many different types of learning (supervised, unsupervised), different types of digital neural networks (deep learning; holographic associated memory, also referred to as just associative memory; inference models), different hardware solutions for AI (CPUs, GPUs, DSPs, FPGAs, TPUs, quantum processors), and a plethora of different software frameworks. So, mapping out all the AI solutions is like mapping out a tree that has new branches springing out every day."

Paul Teich, principal analyst at Tirias Research, concurred. "New classes of learning and AI algorithms are still emerging at a frightening rate," he said. "That means we are still fairly far away from locking in efficient full-custom silicon. General-purpose silicon rules during times of radical change. That is why GPUs, FPGAs, and coprocessor-style matrix math accelerators (NVIDIA's Tensor Core and Google's TPU2 are in this bucket) will dominate until we get farther down the road in selecting best-in-class algorithms and best practices for model development and deployment."

Do we see some of those different branches of AI working together in the future?

McGregor said, "This is a great question." As he sees it now, "Most of the effort is being put on Centralized Intelligence and Hybrid Intelligence, where everything is done in the cloud or split between the cloud for learning and the edge devices for inference. A few companies like Microsoft are working on distributed intelligence, where the intelligence can be spread amongst multiple resources, such as data centers for deep learning."

In his opinion, "The future will require Collective Intelligence, where all these intelligent solutions work together. We do see the future as being one of collective intelligence." But he noted, "When and how we get there has yet to be determined. Which solution has priority? What do you do when these solutions do not agree? How do you collectively share information between drastically different frameworks and neural networks (even creating two neural networks that look the same using the same data is next to impossible)? These are all issues that will have to be worked out."

McGregor added, "I'm not surprised to see Intel starting with [Saffron], because the financial industry will be one of the industries that drive us toward collective intelligence, because of its importance to the global economy."

Now that Intel is offering Saffron's Anti-Money Laundering Advisor as "a product" in the financial market, does this mean that Intel is taking a step, somewhat akin to IBM, toward a "service business model" rather than just sticking to the chip business?

McGregor believes it is. "Intel has done this before and tends to swing back and forth between being a solutions vendor and a technology vendor, but in the case of AI, you almost have to be a solutions vendor because of the need for both hardware and software, and Intel has invested in both."
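Saffron's Associative Memory AI is proprietary, but the general behavior Shriver-Procell describes above, ingesting records one at a time with no training loop and then asking what a given value is associated with, can be sketched with a toy co-occurrence store. The snippet below is only an illustration of that idea, not Saffron's technology; the class name, record fields and sample values are invented for the example.

```python
# Toy illustration (not Saffron's product) of an associative-memory lookup:
# count co-occurrences of attribute values as records stream in, then query
# for what a given value is most strongly associated with.
from collections import defaultdict
from itertools import combinations

class AssociativeMemory:
    def __init__(self):
        self.counts = defaultdict(int)   # (attr_a, attr_b) -> co-occurrence count

    def observe(self, record):
        """Ingest one record (dict of attribute -> value); no training loop needed."""
        items = sorted(record.items())
        for a, b in combinations(items, 2):
            self.counts[(a, b)] += 1

    def associations(self, attr, value):
        """Return the attribute/value pairs most often seen together with (attr, value)."""
        key, hits = (attr, value), defaultdict(int)
        for (a, b), n in self.counts.items():
            if a == key:
                hits[b] += n
            elif b == key:
                hits[a] += n
        return sorted(hits.items(), key=lambda kv: -kv[1])

mem = AssociativeMemory()
mem.observe({"customer": "Mary", "city": "London", "ip": "10.0.0.7"})
mem.observe({"customer": "John", "city": "London", "ip": "10.0.0.7"})
print(mem.associations("ip", "10.0.0.7"))  # both customers turn up against this IP
```

Because every record updates the counts directly, a new association is visible after a single observation, which is the "one-shot, model-free" property mentioned in the interview; transparency follows from the counts themselves being inspectable.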
Release time: 2017-10-24
IBM Uses Deep Learning to Train Raspberry Pi
Computations requiring high-performance computing (HPC) power may soon be done in the palm of your hand, thanks to work done this summer by IBM Research in Dublin, Ireland.

While scientists have come a long way in teaching machines how to process images for facial recognition and understand language to translate texts, IBM researchers focused on a different problem: how to use artificial intelligence (AI) techniques to forecast a physical process. In this case, the focus was on ocean waves, using traditional physics-based models driven by external forces such as the rise and fall of tides and winds blowing in different directions, while the depth and physical properties of the water influence the speed and height of the waves.

HPC is normally essential to resolve the differential equations that encapsulate these physical processes and their relationships, and the expense often limits the spatial resolution, physical processes and time scales that can be investigated by a real-time forecasting platform. In an interview with EE Times, IBM Research senior research manager Sean McKenna said an HPC cluster using big iron has generally been the solution to dealing with the heavy computational load. IBM Research wanted to see if it could do the same work more quickly and more simply, he said.

The differential-equations approach has developed over the course of a century or more, he said. Machine learning through AI is not rule based. "It's non-linear mapping of one input space to an output space," McKenna said. "That's what everything is in AI right now."

Researchers developed a deep-learning framework that provides a 12,000 percent acceleration over these physics-based models at comparable levels of accuracy. McKenna said the validated deep-learning framework can be used to perform real-time forecasts of wave conditions using available forecasted boundary wave conditions, ocean currents, and winds.

"The deep learning method is more of a black box," he said. "It's a little bit of a paradigm shift."

Deep learning isn't about physical modeling and science to figure out what's leading to a set of results; it's about using engineering to solve a problem, and being able to do it more efficiently and faster, said McKenna. "We can build a model, train that model and put it on a more computationally efficient device," he said.

What is clear are the significant benefits. Massively reducing the computational expense means simulations can be done on a Raspberry Pi rather than on HPC infrastructure.

The deep-learning framework was trained to forecast wave conditions at a case-study site at Monterey Bay, Calif., using the physics-based Simulating WAves Nearshore (SWAN) model to generate training data for the deep-learning network. Driven by measured wave conditions, ocean currents from an operational forecasting system, and wind data, the model was run between the beginning of April 2013 and the end of July 2017, generating forecasts at three-hour intervals to provide a total of 12,400 distinct model outputs. The study expands and builds on a collaboration between IBM Research-Ireland, Baylor University and the University of Notre Dame.

The deep-learning model has yet to be deployed to a physical device, said McKenna, but the study demonstrates that the reduction in computational expense means the simulation of a physics model could be run on a Raspberry Pi or any other low-end computing device once the model has been trained on HPC.

"That opens up possibilities as to where that model can be deployed," McKenna said.

Being able to accurately forecast ocean wave heights and directions is a valuable resource for many marine-based industries, as they often operate in harsh environments where power and computing facilities are limited. One scenario has a shipping company using highly accurate forecasts to determine the best voyage route in rough seas to minimize fuel consumption or travel time. A surfer could get data localized to a specific beach to ride the best waves, said McKenna.

IBM Research's deep-learning model could potentially be leveraged to use existing HPC infrastructure to train cheaper computing devices, even a smartphone, he said. "HPC resources are becoming more available in the cloud, so even if you don't own that resource you probably have access to it," he said.
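The workflow described above, using an expensive physics model's outputs to train a cheap neural-network surrogate whose inference can then run on a low-end device, can be sketched in a few lines. The snippet below is only a minimal illustration, not IBM's framework; the input and output dimensions and the random placeholder arrays are assumptions standing in for the real SWAN-derived features and wave-height targets.

```python
# Minimal surrogate-model sketch: fit a small feed-forward network to
# (placeholder) physics-model outputs, then use its cheap forward pass
# in place of the expensive differential-equation solve.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder training set: each row is one three-hourly forecast (boundary
# wave conditions, currents, winds); each target row holds wave heights at a
# handful of grid points. Shapes are invented for the example.
X = rng.normal(size=(12_400, 16))
y = X @ rng.normal(size=(16, 8)) + 0.1 * rng.normal(size=(12_400, 8))

# Train the surrogate on most of the data, hold out the rest for evaluation.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
surrogate.fit(X[:10_000], y[:10_000])

# Inference is just a few small matrix multiplications -- cheap enough for a
# Raspberry Pi-class device.
print("held-out R^2:", surrogate.score(X[10_000:], y[10_000:]))
```

The heavy cost sits entirely in generating the training data and fitting the network; once trained, the forward pass is orders of magnitude cheaper than solving the governing equations, which is the trade-off the article describes.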
Release time: 2017-09-29
IBM Deep Learning Breaks Through
IBM Research has reported an algorithmic breakthrough for deep learning that comes close to achieving the holy grail of ideal scaling efficiency: its new distributed deep-learning (DDL) software enables a nearly linear speedup with each added processor, and the development is intended to achieve similar speedups for each server added under IBM's DDL algorithm.

The aim "is to reduce the wait time associated with deep-learning training from days or hours to minutes or seconds," according to IBM fellow and Think blogger Hillery Hunter, director of the Accelerated Cognitive Infrastructure group at IBM Research.

Hunter notes in a blog post on the development that "most popular deep-learning frameworks scale to multiple GPUs in a server, but not to multiple servers with GPUs." The IBM team "wrote software and algorithms that automate and optimize the parallelization of this very large and complex computing task across hundreds of GPU accelerators attached to dozens of servers," Hunter adds.

IBM claims test results of 95 percent scaling efficiency for up to 256 Nvidia Tesla P100 GPUs using the open-source Caffe deep-learning framework. The results were calculated for image-recognition learning but are expected to apply to similar learning tasks. IBM achieved the nearly linear scaling efficiency in 50 minutes of training time; Facebook Inc. previously achieved 89 percent efficiency in 60 minutes of training time on the same data set.

IBM is also claiming a validation accuracy record of 33.8 percent on 7.5 million images in just seven hours of training on the ImageNet-22k data set, compared with Microsoft Corp.'s previous record of 29.8 percent accuracy in 10 days of training on the same data set. IBM's processor was its PowerAI platform, a 64-node Power8 cluster (plus the 256 Nvidia GPUs) providing more than 2 petaflops of single-precision floating-point performance.

The company is making its DDL suite available free to any PowerAI platform user. It is also offering third-party developers a variety of application programming interfaces to let them select the underlying algorithms that are most relevant to their application.
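As a quick sanity check on what those scaling numbers mean, the snippet below applies the standard definition of scaling efficiency (observed speedup divided by the ideal linear speedup) to the figures quoted in the article; it is a back-of-the-envelope illustration, not code from IBM's DDL suite.

```python
# Scaling efficiency = observed speedup / ideal (linear) speedup,
# so effective speedup = n_workers * efficiency.
def effective_speedup(n_workers: int, efficiency: float) -> float:
    """Speedup actually delivered by n_workers at a given scaling efficiency."""
    return n_workers * efficiency

# IBM's reported figure: ~95% efficiency across 256 Tesla P100 GPUs.
print(effective_speedup(256, 0.95))   # ~243x a single GPU, versus the ideal 256x
```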
Release time: 2017-08-14
Nvidia Takes Deep Learning to School
That deep learning is "transforming computing" is the message Nvidia hopes to hammer home at its GPU Tech conference. On that theme, Nvidia has styled itself as a firebrand, catalyst and deep-learning enabler and, in the long run, a deep profiteer.

Among the telltale signs that Nvidia is betting its future on this branch of artificial intelligence (AI) is the recent launch of its Deep Learning Institute, with plans to increase the number of developers it trains to 100,000 this year. Nvidia trained 10,000 developers in deep learning in 2016.

Over the last few years, AI has made inroads into "all parts of science," said Greg Estes, Nvidia's vice president responsible for developer programs. AI is becoming an integral component of "all applications ranging from cancer research, robotics, manufacturing to financial services, fraud detection and intelligent video analysis," he noted.

Nvidia wants to be known as the first resort for developers creating apps that use AI as a key component, said Estes.

Deep learning is in the computer science curriculum at many schools, but few universities offer a degree specifically in AI.

At its Deep Learning Institute, Nvidia plans to deliver "hands-on" training to "industry, government and academia," according to Estes, with a mission to "help developers, data scientists and engineers to get started in training, optimizing, and deploying neural networks to solve the real world problems in diverse disciplines."

How can you fit AI into apps?

Kevin Krewell, principal analyst at Tirias Research, told EE Times, "The challenge of deep learning is that it's just hard to get started and to figure out how to fit it into traditional app development."

He noted, "I think Nvidia is trying to get a wider set of developers trained on how to fit machine learning (ML) into existing development programs. Unlike traditional programs where algorithms are employed to perform a task, ML is a two-stage process with a training phase and a deployment phase."

Nvidia's edge is that "machine learning performs better with an accelerator like a GPU, rather than relying just on the CPU," Krewell added.

As Nvidia readies its Deep Learning Institute, the company is also entering a host of partnership deals with AI "framework" communities and universities. Among them are Facebook, Amazon, Google (TensorFlow), the Mayo Clinic, Stanford University and Udacity.

Such collaborations with framework vendors are critical, because every developer working on AI apps needs to have cloud and deep-learning resources.

As Jim McGregor, principal analyst at Tirias Research, told us, "The most difficult thing for app developers are the cloud resources and large data sets. As an example, the mobile suppliers are promoting machine learning on their devices, but to develop apps for those devices you need cloud/deep learning resources and the data sets to train those resources, which the mobile players are not providing."

Nvidia can provide the hardware resources and a mature software model, but "developers still need the service provider and data sets," McGregor added.

According to Nvidia, the company is also working with the Microsoft Azure, IBM Power and IBM Cloud teams to port lab content to their cloud solutions.
Release time: 2017-05-10
